An accelerated inexact dampened augmented Lagrangian method for linearly-constrained nonconvex composite optimization problems

Abstract

This paper proposes and analyzes an accelerated inexact dampened augmented Lagrangian (AIDAL) method for solving linearly-constrained nonconvex composite optimization problems. Each iteration of the AIDAL method consists of: (i) inexactly solving a proximal augmented Lagrangian (AL) subproblem by calling an accelerated composite gradient (ACG) subroutine; (ii) applying an under-relaxed Lagrange multiplier update; (iii) using a novel test to check whether the penalty parameter of the AL function should be increased. Under several mild assumptions involving the dampening factor and the under-relaxation constant, it is shown that the method generates an approximate stationary point of the constrained problem in $$\mathcal{O}(\varepsilon ^{-5/2}\log \varepsilon ^{-1})$$ iterations of the ACG subroutine, for a given tolerance $$\varepsilon >0$$. Numerical experiments are also given to show the computational efficiency of the proposed method.
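To make the three steps above concrete, here is a toy sketch of a dampened, under-relaxed augmented Lagrangian loop for min f(x) subject to Ax = b with smooth f. This is not the paper's AIDAL method: the inner solve uses plain gradient descent as a stand-in for the ACG subroutine, and `theta` (dampening factor), `chi` (under-relaxation constant), and the penalty test are simplified illustrations of the roles those quantities play in the abstract.

```python
import numpy as np

def dampened_al(f_grad, A, b, x0, theta=0.9, chi=0.5, c=1.0,
                outer_iters=50, inner_iters=200, c_max=100.0):
    """Toy dampened AL loop; not the paper's AIDAL algorithm."""
    x, lam = x0.astype(float).copy(), np.zeros(A.shape[0])
    for _ in range(outer_iters):
        # (i) inexactly minimize the augmented Lagrangian
        #     L_c(x) = f(x) + <lam, Ax - b> + (c/2) ||Ax - b||^2
        # via gradient descent (a stand-in for ACG); the step size
        # assumes grad f is 1-Lipschitz, true for this toy example.
        lr = 1.0 / (1.0 + c * np.linalg.norm(A, 2) ** 2)
        for _ in range(inner_iters):
            x = x - lr * (f_grad(x) + A.T @ (lam + c * (A @ x - b)))
        # (ii) dampened (theta), under-relaxed (chi) multiplier update
        lam = theta * (lam + chi * c * (A @ x - b))
        # (iii) crude penalty test: grow c while feasibility is poor
        if np.linalg.norm(A @ x - b) > 1e-3:
            c = min(2.0 * c, c_max)
    return x, lam

# Usage: min ||x||^2 / 2 subject to x1 + x2 = 1; the solution is (0.5, 0.5).
A, b = np.array([[1.0, 1.0]]), np.array([1.0])
x, lam = dampened_al(lambda z: z, A, b, np.zeros(2))
```

The dampening of the multiplier (`theta < 1`) and the under-relaxed step (`chi < 1`) are what distinguish this scheme from a classical AL update; the paper's analysis quantifies how these constants must be chosen in the nonconvex setting.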


Similar articles

An inexact Newton method for nonconvex equality constrained optimization

We present a matrix-free line search algorithm for large-scale equality constrained optimization that allows for inexact step computations. For strictly convex problems, the method reduces to the inexact sequential quadratic programming approach proposed by Byrd et al. [SIAM J. Optim. 19(1) 351–369, 2008]. For nonconvex problems, the methodology developed in this paper allows for the presence o...


Inexact accelerated augmented Lagrangian methods

The augmented Lagrangian method is a popular method for solving linearly constrained convex minimization problems and has been used in many applications. Recently, an accelerated version of the augmented Lagrangian method was developed. The augmented Lagrangian method involves a subproblem that does not have a closed-form solution in general. In this talk, we propose an inexact version of the accelerate...


Smoothing augmented Lagrangian method for nonsmooth constrained optimization problems

In this paper, we propose a smoothing augmented Lagrangian method for finding a stationary point of a nonsmooth and nonconvex optimization problem. We show that any accumulation point of the iteration sequence generated by the algorithm is a stationary point provided that the penalty parameters are bounded. Furthermore, we show that a weak version of the generalized Mangasarian–Fromovitz constr...


An adaptive augmented Lagrangian method for large-scale constrained optimization

We propose an augmented Lagrangian algorithm for solving large-scale constrained optimization problems. The novel feature of the algorithm is an adaptive update for the penalty parameter motivated by recently proposed techniques for exact penalty methods. This adaptive updating scheme greatly improves the overall performance of the algorithm without sacrificing the strengths of the core augment...


An augmented Lagrangian trust region method for equality constrained optimization

In this talk, we present a trust region method for solving equality constrained optimization problems, which is motivated by the famous augmented Lagrangian function. It is different from standard augmented Lagrangian methods where the augmented Lagrangian function is minimized at each iteration. This method, for fixed Lagrange multiplier and penalty parameters, tries to minimize an approximate...



Journal

Journal title: Computational Optimization and Applications

Year: 2023

ISSN: 0926-6003, 1573-2894

DOI: https://doi.org/10.1007/s10589-023-00464-5